
2024 iThome 鐵人賽

DAY 27

Questions

Q27

A company wants to implement real-time analytics capabilities. The company wants to use Amazon Kinesis Data Streams and Amazon Redshift to ingest and process streaming data at the rate of several gigabytes per second. The company wants to derive near real-time insights by using existing business intelligence (BI) and analytics tools. Which solution will meet these requirements with the LEAST operational overhead?

  • [ ] A. Use Kinesis Data Streams to stage data in Amazon S3. Use the COPY command to load data from Amazon S3 directly into Amazon Redshift to make the data immediately available for real-time analysis.
  • [ ] B. Access the data from Kinesis Data Streams by using SQL queries. Create materialized views directly on top of the stream. Refresh the materialized views regularly to query the most recent stream data.
  • [x] C. Create an external schema in Amazon Redshift to map the data from Kinesis Data Streams to an Amazon Redshift object. Create a materialized view to read data from the stream. Set the materialized view to auto refresh.
  • [ ] D. Connect Kinesis Data Streams to Amazon Kinesis Data Firehose. Use Kinesis Data Firehose to stage the data in Amazon S3. Use the COPY command to load the data from Amazon S3 to a table in Amazon Redshift.

Description

  • A company wants real-time analytics capabilities.
  • It uses Amazon Kinesis Data Streams and Amazon Redshift to ingest and process the data.
  • Data arrives at a rate of several gigabytes per second.
  • It wants to derive near-real-time insights through its existing BI and analytics tools.

Analysis

  • "Use Kinesis Data Streams to stage data in Amazon S3" -> wrong service: Kinesis Data Firehose, not Kinesis Data Streams, is what delivers data to S3, so A is out.
  • "Access the data from Kinesis Data Streams by using SQL queries" -> you cannot run SQL directly against the stream; Redshift first needs an external schema that maps the stream, and manually refreshing the views adds operational overhead, so B is out.
  • C is correct (KDS -> Redshift streaming ingestion: external schema plus an auto-refreshing materialized view).
  • D would work, but the extra hops (KDS -> KDF -> S3 -> COPY into Redshift) add operational overhead, latency, and cost, so it does not satisfy "LEAST operational overhead."
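Option C corresponds to Redshift streaming ingestion. A minimal DDL sketch of that setup; the schema name, stream name, materialized view name, and IAM role ARN below are all placeholders for illustration:

```sql
-- Map the Kinesis data stream into Redshift through an external schema
-- (role ARN and stream name are hypothetical)
CREATE EXTERNAL SCHEMA kds
FROM KINESIS
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-streaming-role';

-- Materialized view that reads directly from the stream and
-- refreshes automatically as new records arrive
CREATE MATERIALIZED VIEW clickstream_mv
AUTO REFRESH YES
AS
SELECT approximate_arrival_timestamp,
       JSON_PARSE(kinesis_data) AS payload
FROM kds."my_stream";
```

BI tools then query `clickstream_mv` like any other Redshift object, which is why this option carries the least operational overhead.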

Q28

A company uses an Amazon QuickSight dashboard to monitor usage of one of the company's applications. The company uses AWS Glue jobs to process data for the dashboard. The company stores the data in a single Amazon S3 bucket. The company adds new data every day. A data engineer discovers that dashboard queries are becoming slower over time. The data engineer determines that the root cause of the slowing queries is long-running AWS Glue jobs. Which actions should the data engineer take to improve the performance of the AWS Glue jobs? (Choose two.)

  • [x] A. Partition the data that is in the S3 bucket. Organize the data by year, month, and day.
  • [x] B. Increase the AWS Glue instance size by scaling up the worker type.
  • [ ] C. Convert the AWS Glue schema to the DynamicFrame schema class.
  • [ ] D. Adjust AWS Glue job scheduling frequency so the jobs run half as many times each day.
  • [ ] E. Modify the IAM role that grants access to AWS glue to grant access to all S3 features.

Description

  • A company monitors one of its applications with an Amazon QuickSight dashboard.
  • AWS Glue jobs process the data behind the dashboard.
  • The data lives in a single S3 bucket, with new data added every day.
  • Dashboard queries keep getting slower, and the bottleneck is the long-running Glue jobs.
  • How should this be improved?

Analysis

  • Because all the data is piled into a single location, the natural fix is to partition it by date so each job reads only the slice it needs.
  • To attack the slowness at its root, reduce the amount of data scanned (A) and scale up the compute behind the job by choosing a larger Glue worker type (B).
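The idea behind option A can be sketched in plain Python: writing objects under Hive-style `year=`/`month=`/`day=` prefixes lets Glue register those path segments as partition columns and prune them at read time, instead of scanning the whole bucket. The bucket and dataset names here are made up for illustration:

```python
from datetime import date

def partitioned_key(bucket: str, dataset: str, d: date, filename: str) -> str:
    """Build a Hive-style partitioned S3 key (year=/month=/day=),
    the layout Glue crawlers recognize as partition columns."""
    return (
        f"s3://{bucket}/{dataset}/"
        f"year={d.year}/month={d.month:02d}/day={d.day:02d}/{filename}"
    )

# Each day's data lands in its own prefix, so a job that only needs
# one day's data reads one partition instead of the whole bucket.
key = partitioned_key("analytics-bucket", "app-usage", date(2024, 10, 12), "part-0000.parquet")
print(key)  # s3://analytics-bucket/app-usage/year=2024/month=10/day=12/part-0000.parquet
```

With this layout in place, a Glue job can restrict its reads to specific partitions (for example via a pushdown predicate), which is what actually shortens the job's runtime.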

Q29

A data engineer needs to use AWS Step Functions to design an orchestration workflow. The workflow must parallel process a large collection of data files and apply a specific transformation to each file. Which Step Functions state should the data engineer use to meet these requirements?

  • [ ] A. Parallel state
  • [ ] B. Choice state
  • [x] C. Map state
  • [ ] D. Wait state

Description

  • An engineer is designing an orchestration workflow with Step Functions.
  • A large collection of data files must be processed in parallel.
  • Each file needs a specific transformation applied.

Analysis

  • The Map state lets you define a single set of processing steps that Step Functions then runs in parallel for each item in a collection.
  • The documentation notes:
  • Use the Map state in Distributed mode when you need to orchestrate large-scale parallel workloads that meet any combination of the following conditions:
    • The size of your dataset exceeds 256 KB.
    • The workflow's execution event history exceeds 25,000 entries.
    • You need a concurrency of more than 40 parallel iterations.
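A minimal Amazon States Language sketch of a Map state in Distributed mode; the state names, bucket, and Lambda ARN are placeholders for illustration:

```json
{
  "TransformFiles": {
    "Type": "Map",
    "ItemReader": {
      "Resource": "arn:aws:states:::s3:listObjectsV2",
      "Parameters": { "Bucket": "my-input-bucket" }
    },
    "ItemProcessor": {
      "ProcessorConfig": { "Mode": "DISTRIBUTED", "ExecutionType": "STANDARD" },
      "StartAt": "TransformOneFile",
      "States": {
        "TransformOneFile": {
          "Type": "Task",
          "Resource": "arn:aws:lambda:us-east-1:123456789012:function:transform-file",
          "End": true
        }
      }
    },
    "MaxConcurrency": 100,
    "End": true
  }
}
```

The `ItemReader` lists the files, and Step Functions starts a child workflow execution per file, up to `MaxConcurrency` at a time, which is exactly the "parallel process a large collection of files" requirement in the question.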

Series: 老闆,外帶一份 AWS Certified Data Engineer